When and where to transfer for Bayesian network parameter learning
Authors
Abstract
Similar resources
When and Where to Transfer for Bayes Net Parameter Learning.
Learning Bayesian networks from scarce data is a major challenge in real-world applications where data are hard to acquire. Transfer learning techniques attempt to address this by leveraging data from different but related problems. For example, it may be possible to exploit medical diagnosis data from a different country. A challenge with this approach is heterogeneous relatedness to the targe...
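One common way to leverage a related source dataset, in the spirit this abstract describes, is to pool the target counts with down-weighted source counts when estimating a node's distribution. The sketch below is an illustrative assumption, not the paper's actual transfer method; the function name, the fixed `weight`, and the Dirichlet `prior` are all hypothetical:

```python
from collections import Counter

def transfer_cpt(target_obs, source_obs, weight=0.3, prior=1.0):
    """Estimate P(X) for a discrete node by pooling target counts with
    down-weighted source counts plus a symmetric Dirichlet prior."""
    values = sorted(set(target_obs) | set(source_obs))
    t, s = Counter(target_obs), Counter(source_obs)
    raw = {v: t[v] + weight * s[v] + prior for v in values}
    z = sum(raw.values())
    return {v: raw[v] / z for v in values}

# Scarce target data, plentiful data from a related source (e.g. another country):
target = ["flu", "cold", "flu"]
source = ["cold"] * 60 + ["flu"] * 40
print(transfer_cpt(target, source, weight=0.2))
```

With `weight=0` this reduces to a smoothed target-only estimate; raising it pulls the estimate toward the source data, which only helps when the source is genuinely related — the heterogeneous-relatedness problem the abstract points to.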
Inductive Transfer for Bayesian Network Structure Learning
We consider the problem of learning Bayes Net structures for related tasks. We present an algorithm for learning Bayes Net structures that takes advantage of the similarity between tasks by biasing learning toward similar structures for each task. Heuristic search is used to find a high scoring set of structures (one for each task), where the score for a set of structures is computed in a princ...
Bayesian Network Learning with Parameter Constraints
The task of learning models for many real-world problems requires incorporating domain knowledge into learning algorithms, to enable accurate learning from a realistic volume of training data. This paper considers a variety of types of domain knowledge for constraining parameter estimates when learning Bayesian Networks. In particular, we consider domain knowledge that constrains the values or ...
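A minimal sketch of one such constraint type — a domain-expert lower bound on an outcome's probability. The formula `p_i = l_i + (1 - Σl) · mle_i` is an illustrative way to respect the bounds while staying on the simplex; it is not the paper's estimator:

```python
def constrained_estimate(counts, lower_bounds):
    """Maximum-likelihood estimate of a discrete distribution, adjusted so
    each outcome respects a domain-knowledge lower bound.
    p_i = l_i + (1 - sum(l)) * mle_i keeps p on the simplex with p_i >= l_i."""
    total = sum(counts)
    mle = [c / total for c in counts]
    slack = 1.0 - sum(lower_bounds)
    assert slack >= 0, "lower bounds must sum to at most 1"
    return [l + slack * m for l, m in zip(lower_bounds, mle)]

# Expert knowledge: P(fever | flu) >= 0.5, but only 4 flu cases were observed.
print(constrained_estimate([1, 3], [0.5, 0.0]))  # counts for (fever, no fever)
```

With ample data the estimate approaches the unconstrained MLE scaled into the feasible region; with scarce data the bounds dominate, which is exactly when domain knowledge earns its keep.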
Bayesian Network Parameter Learning using EM with Parameter Sharing
This paper explores the effects of parameter sharing on Bayesian network (BN) parameter learning when there is incomplete data. Using the Expectation Maximization (EM) algorithm, we investigate how varying degrees of parameter sharing, varying number of hidden nodes, and different dataset sizes impact EM performance. The specific metrics of EM performance examined are: likelihood, error, and the ...
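As a rough, self-contained illustration of EM-based BN parameter learning from incomplete data — here a tiny two-node network A → B with A occasionally missing, without the parameter sharing the paper studies, and not the paper's experimental setup:

```python
def em_two_node(data, iters=50):
    """EM for a two-node BN A -> B (both binary), where A may be missing (None).
    Learns theta_a = P(A=1) and theta_b[a] = P(B=1 | A=a)."""
    theta_a, theta_b = 0.5, [0.4, 0.6]  # arbitrary initialisation
    for _ in range(iters):
        # E-step: expected counts, distributing each missing A over its posterior
        n_a1, n = 0.0, 0.0
        nb = [[0.0, 0.0], [0.0, 0.0]]   # nb[a][b]: expected joint counts
        for a, b in data:
            if a is None:
                p1 = theta_a * (theta_b[1] if b else 1 - theta_b[1])
                p0 = (1 - theta_a) * (theta_b[0] if b else 1 - theta_b[0])
                w1 = p1 / (p1 + p0)     # posterior P(A=1 | B=b)
                weights = [(0, 1.0 - w1), (1, w1)]
            else:
                weights = [(a, 1.0)]
            for av, w in weights:
                n += w
                n_a1 += w * av
                nb[av][b] += w
        # M-step: re-estimate parameters from the expected counts
        theta_a = n_a1 / n
        theta_b = [nb[a][1] / (nb[a][0] + nb[a][1]) for a in (0, 1)]
    return theta_a, theta_b

data = [(1, 1)] * 4 + [(0, 0)] * 4 + [(None, 1), (None, 0)]
print(em_two_node(data))
```

Each EM iteration fills in the missing values in expectation (E-step) and then maximises the completed-data likelihood in closed form (M-step); the metrics the abstract names (likelihood, error) are computed over exactly these iterates.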
MapReduce for Bayesian Network Parameter Learning using the EM Algorithm
This work applies the distributed computing framework MapReduce to Bayesian network parameter learning from incomplete data. We formulate the classical Expectation Maximization (EM) algorithm within the MapReduce framework. Analytically and experimentally we analyze the speed-up that can be obtained by means of MapReduce. We present details of the MapReduce formulation of EM, report speed-ups v...
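The key observation behind a MapReduce formulation of EM is that the E-step decomposes over data shards: each mapper emits partial expected sufficient statistics for its shard, a reducer sums them, and the driver performs the M-step. A toy single-parameter sketch, simulated locally in plain Python rather than on an actual MapReduce cluster, and not the paper's implementation:

```python
from functools import reduce

def e_step_map(shard, theta):
    """Mapper: expected count of X=1 and total count for one data shard.
    A missing value (None) contributes its posterior expectation, here theta."""
    n1 = sum(theta if x is None else x for x in shard)
    return (n1, len(shard))

def e_step_reduce(a, b):
    """Reducer: sum the partial sufficient statistics from two mappers."""
    return (a[0] + b[0], a[1] + b[1])

def em_mapreduce(shards, theta=0.5, iters=20):
    for _ in range(iters):
        n1, n = reduce(e_step_reduce, (e_step_map(s, theta) for s in shards))
        theta = n1 / n  # M-step runs on the driver
    return theta

shards = [[1, 1, None], [0, None, 1], [1, 0, None]]
print(em_mapreduce(shards))
```

Because mappers touch disjoint shards and the reducer only adds fixed-size statistic tuples, the per-iteration work parallelises across machines, which is the source of the speed-ups the abstract reports.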
Journal
Journal title: Expert Systems with Applications
Year: 2016
ISSN: 0957-4174
DOI: 10.1016/j.eswa.2016.02.011